A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation.
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulating airplanes in wind tunnels, simulating the detonation of nuclear weapons, and research into nuclear fusion).
Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, echoing the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".
Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM and Hewlett-Packard, which purchased many of the 1980s companies to gain their experience. Currently, Japan's K computer, built by Fujitsu in Kobe, Japan, is the fastest in the world.[2] It is three times faster than the previous holder of that title, the Tianhe-1A supercomputer located in China.
The term supercomputer itself is rather fluid, and the speed of earlier "supercomputers" tends to become typical of future ordinary computers. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at lower prices to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical processor counts in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs (see, for instance, the Transputer). Today, parallel designs are based on "off-the-shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors such as NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell processors, and FPGAs. The architecture of today's supercomputers is implemented using highly tuned computer clusters of thousands of commodity processors communicating over custom interconnects.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.
The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[3] The CDC 6600, released in 1964, is generally considered the first supercomputer.[4][5]
Cray left CDC in 1972 to form his own company.[6] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[7][8] The Cray-2, released in 1985, was an 8-processor liquid-cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[9]
While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaflops per processor.[10][11] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[12][13][14] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.[15]
The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[16]
Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[17]
In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[18]
This is a recent list of the computers which appeared at the top of the TOP500 list,[19] and the "Peak speed" is given as the "Rmax" rating. For more historical data see History of supercomputing.
Year | Supercomputer | Peak speed (Rmax) | Location |
---|---|---|---|
2008 | IBM Roadrunner | 1.026 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA |
2008 | IBM Roadrunner | 1.105 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA |
2009 | Cray Jaguar | 1.759 PFLOPS | DoE-Oak Ridge National Laboratory, Tennessee, USA |
2010 | Tianhe-1A | 2.566 PFLOPS | National Supercomputing Center, Tianjin, China |
2011 | Fujitsu K computer | 8.162 PFLOPS | RIKEN, Kobe, Japan |
2011 | Fujitsu K computer | 10.51 PFLOPS | RIKEN, Kobe, Japan |
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
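In its standard form, Amdahl's law states that if a fraction P of a program's work can be parallelized across N processors while the remaining fraction 1 − P stays serial, the overall speedup is bounded by

S(N) = \frac{1}{(1 - P) + \frac{P}{N}}

so, for example, even a code that is 95% parallel (P = 0.95) can never run more than 20 times faster, no matter how many processors are applied; this is why eliminating serialization receives so much design effort.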
A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts of electricity.[20] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour, or about $3.5 million per year.
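As a rough check of that figure, and assuming continuous operation at a flat $0.10/kWh tariff, the annual cost works out as

4\,\text{MW} \times 1000\,\tfrac{\text{kW}}{\text{MW}} \times \$0.10/\text{kWh} = \$400/\text{hour}, \qquad \$400/\text{hour} \times 8760\,\text{hours/year} \approx \$3.5\ \text{million/year}.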
Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways.[21] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.[22][23][24]
The packing of thousands of processors together inevitably generates significant heat density that needs to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[9] However, the submerged liquid-cooling approach was not practical for multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[25]
In the Blue Gene system, IBM deliberately used low-power processors to deal with heat density.[26] The IBM Power 775, released in 2011, has closely packed elements that require water cooling.[27] The IBM Aquasar system, by contrast, uses hot-water cooling to achieve energy efficiency, the water being used to heat buildings as well.[28][29]
The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, IBM's Roadrunner operated at 376 MFLOPS/watt.[30][31] In November 2010, the Blue Gene/Q reached 1684 MFLOPS/watt.[32][33] In June 2011, the top two spots on the Green500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/watt), with the DEGIMA cluster in Nagasaki placing third at 1375 MFLOPS/watt.[34]
Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
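As a rough illustration, for two components separated by 10 metres the one-way signal time is bounded below by the speed of light:

t \geq \frac{d}{c} = \frac{10\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 33\ \text{ns},

and real interconnects are slower still, since switching, serialization and software overheads dominate the raw propagation delay.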
Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
A number of technologies developed for supercomputers have since spread into mainstream computing.
Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers.
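As an illustrative sketch of the SIMD idea (not drawn from any particular supercomputer codebase), the loop below is shown both in scalar C and using x86 SSE intrinsics, where a single instruction operates on four single-precision values at once; the array-length and alignment assumptions are simplifications for the example:

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* Scalar version: one addition per loop iteration. */
void add_scalar(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* SIMD version: four additions per instruction. Assumes n is a multiple
   of 4 and the arrays are 16-byte aligned (simplifications for this sketch). */
void add_simd(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(&a[i]);
        __m128 vb = _mm_load_ps(&b[i]);
        _mm_store_ps(&c[i], _mm_add_ps(va, vb));
    }
}
```

Modern compilers will often auto-vectorize the scalar version automatically, which is one way this supercomputer-derived technique reaches ordinary desktop code.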
Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraFLOPS. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
The May 2010 TOP500 list included three supercomputers based on GPGPUs. In particular, the number-four supercomputer, Nebulae, built by Dawning in China, is based on GPGPUs.[35]
Supercomputers today most often use variants of the Linux operating system.[36]
Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary operating systems largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of operating systems such as Cray's Unicos and, later, Linux.
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA and OpenCL.
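As a minimal, self-contained sketch of the message-passing style described above (a generic example, not code from any specific machine), the following C program distributes a summation across MPI ranks and combines the partial results with a single reduction:

```c
#include <stdio.h>
#include <mpi.h>

/* Minimal MPI sketch: each process sums part of 1..N, then the partial
   sums are combined on rank 0 with a single collective reduction. */
int main(int argc, char **argv) {
    const long N = 1000000;
    int rank, size;
    long i;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles a strided slice of the index range. */
    for (i = rank + 1; i <= N; i += size)
        local += (double)i;

    /* Combine the partial results; only rank 0 receives the answer. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f\n", total);

    MPI_Finalize();
    return 0;
}
```

Under a typical MPI installation this would be compiled with mpicc and launched with, for example, mpirun -np 64 ./sum; the same pattern of local computation followed by collective communication underlies much larger production codes.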
Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community that often creates disruptive technology.
Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD, and with each multiprocessor controlling multiple co-processors. Supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, the number of simultaneous instructions per SIMD processor, and the type and number of co-processors.
As of June 2011, the fastest supercomputer in the world is the K computer, which has over 68,000 8-core processors, while the Tianhe-1A system at the National University of Defense Technology ranks second with more than 14,000 multi-core processors.
In February 2009, IBM also announced work on "Sequoia", expected to be a 20-petaflop supercomputer. This would be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in late 2011.[37] Sequoia will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory. It will be housed in 96 refrigerators spanning roughly 3,000 square feet (280 m²).[38]
Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to outperform desktop machines of the time have tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run, and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multi-million-dollar Cray C90 supercomputer used in the early 1990s; most workloads that required such a supercomputer in the 1990s can be run on workstations costing less than 4,000 US dollars as of 2010. Supercomputing density is also increasing: computing power that in 1998 required a large room can now fit in less than a desktop footprint, making desktop supercomputers available.
In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.
A special-purpose supercomputer is a high-performance computing device with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing better price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computations and brute-force codebreaking. Historically, a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular class of problems.
Examples of special-purpose supercomputers include chess-playing machines such as IBM's Deep Blue, the GRAPE machines for astrophysics, and the EFF DES cracker for brute-force codebreaking.
In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10¹² FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10¹⁵ FLOPS, pronounced petaflops). This measurement is quoted either as the theoretical floating-point performance of a processor (derived from the manufacturer's processor specifications and shown as Rpeak in the TOP500 lists), which is generally unachievable when running real workloads, or as the achievable throughput (derived from the LINPACK benchmark and shown as Rmax in the TOP500 lists). The LINPACK benchmark performs LU decomposition of a large matrix. LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which, for example, may require more memory bandwidth, better integer computing performance, or a high-performance I/O system to achieve high levels of performance.
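The theoretical peak is essentially a product of processor specifications. For a hypothetical system (the numbers here are illustrative, not taken from any listed machine):

R_{\text{peak}} = \text{number of cores} \times \text{clock rate} \times \text{floating-point operations per core per cycle},

so 10,000 cores at 2.0 GHz, each performing 4 floating-point operations per cycle, give 10{,}000 \times 2.0\times10^{9} \times 4 = 8\times10^{13} FLOPS, i.e. 80 TFLOPS; the measured Rmax from LINPACK is then some fraction of this figure.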
"Petascale" supercomputers can process one quadrillion (1015) (1000 trillion) FLOPS. Exascale is computing performance in the exaflops range. An exaflop is one quintillion (1018) FLOPS (one million teraflops).
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
The K computer is the world's fastest supercomputer at 10.51 petaFLOPS. It consists of 88,000 SPARC64 VIIIfx CPUs, and spans 864 server racks. Fujitsu was not able to give the official power consumption of the completed K cluster, but in June 2011, when it reached 8.162 petaflops, it consumed 9.89 megawatts, costing $9.89 million a year.[46]
Opportunistic supercomputing is a form of networked grid computing whereby a “super virtual computer” of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamics simulations.
The fastest grid computing system is Folding@home, which reported 8.8 petaflops of processing power as of May 2011. Of this, 7.1 petaflops are contributed by clients running on various GPUs, 1.8 petaflops come from PlayStation 3 systems, and the rest from various computer systems.[47]
The BOINC platform hosts a number of distributed computing projects. As of May 2011, BOINC recorded a processing power of over 5.5 petaflops through over 480,000 active computers on the network.[48] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 700 teraflops through over 33,000 active computers.[49]
As of May 2011, GIMPS's distributed Mersenne prime search achieves about 60 teraflops through over 25,000 registered computers.[50] The Internet PrimeNet Server has supported GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997.
Quasi-opportunistic supercomputing is a form of distributed computing whereby the “super virtual computer” of a large number of networked, geographically dispersed computers performs computing tasks that demand huge processing power.[51] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and by using intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids requires the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault-tolerant message passing libraries and data pre-conditioning.[51]
The PlayStation 3 Gravity Grid[52] uses a network of 16 machines, and exploits the Cell processor for its intended application: astrophysical simulations of large supermassive black holes capturing smaller compact objects. Each Cell processor has a main CPU and 6 floating-point vector processors, giving the cluster a total of 16 general-purpose processors and 96 vector processors. The cluster was built in 2007 by Dr. Gaurav Khanna, a professor in the Physics Department of the University of Massachusetts Dartmouth, with support from Sony Computer Entertainment, and was the first PS3 cluster to generate numerical results published in the scientific research literature.
Also a "quasi-supercomputer" is Google's search engine system with estimated total processing power of between 126 and 316 teraflops, as of April 2004.[53] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[54] According to 2008 estimates, the processing power of Google's cluster might reach from 20 to 100 petaflops.[55]
Other notable computer clusters are the flash mob cluster, the Qoscos Grid and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster still requires uniform architecture.
IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".
Other PFLOPS projects include one by Narendra Karmarkar in India,[56] a C-DAC effort targeted for 2010,[57] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[58]
In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1-petaflop computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012.[59] Meanwhile, IBM is constructing a 20 PFLOPS supercomputer, named Sequoia, at Lawrence Livermore National Laboratory, based on the Blue Gene architecture and scheduled to go online in 2011.
Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10¹⁸ FLOPS, one quintillion FLOPS) in 2019.[60] Using the Intel MIC multi-core processor architecture, which is Intel's response to GPU systems, SGI plans to achieve a 500-fold increase in performance by 2018 in order to reach one exaflop.[61] Samples of MIC chips with 32 cores that combine vector processing units with standard CPUs have become available.[61]
On October 11, 2011, Oak Ridge National Laboratory announced it was building a 20-petaflop supercomputer, named Titan, to become operational in 2012. The hybrid Titan system will combine AMD Opteron processors with “Kepler” NVIDIA Tesla graphics processing unit (GPU) technology.[62]
Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10²¹ FLOPS, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[63] Such systems might be built around 2030.[64]
The Indian government has committed Rs 10,000 crore to indigenously develop the world's fastest supercomputer by 2017. The Planning Commission of India has agreed to provide the funds to ISRO and the Indian Institute of Science (IISc), Bangalore, to develop a supercomputer with a performance of 132.8 exaflops. The Indian supercomputer would be used only for enhancing the country's space capabilities and for predicting monsoons and precise weather inputs to boost the country's agricultural output. The target of reaching exaflop-level performance by 2017 is very ambitious, and ISRO has planned carefully to meet it: key equipment for the supercomputer has already been ordered, and most of the other components will be developed indigenously in India.[65]
Decade | Uses and computer involved |
---|---|
1970s | Weather forecasting, aerodynamic research (Cray-1).[66] |
1980s | Probabilistic analysis,[67] radiation shielding modeling[68] (CDC Cyber). |
1990s | Brute force code breaking (EFF DES cracker);[69] 3D nuclear test simulations as a substitute for physical testing, in compliance with the Nuclear Non-Proliferation Treaty (ASCI Q).[70] |
2010s | Molecular dynamics simulation (Tianhe-1A).[71] |